
    Harnessing function from form: towards bio-inspired artificial intelligence in neuronal substrates

    Despite the recent success of deep learning, the mammalian brain is still unrivaled when it comes to interpreting complex, high-dimensional data streams like visual, auditory and somatosensory stimuli. However, the underlying computational principles allowing the brain to deal with unreliable, high-dimensional and often incomplete data while having a power consumption on the order of a few watts are still mostly unknown. In this work, we investigate how specific functionalities emerge from simple structures observed in the mammalian cortex, and how these might be utilized in non-von Neumann devices like “neuromorphic hardware”. Firstly, we show that an ensemble of deterministic, spiking neural networks can be shaped by a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme where spikes (or “action potentials”) represent samples of a posterior distribution, constrained by sensory input, without the need for any source of stochasticity. Secondly, we introduce a top-down framework where neuronal and synaptic dynamics are derived using a least action principle and gradient-based minimization. Combined, the neuronal and synaptic dynamics approximate real-time error backpropagation and can be mapped to mechanistic components of cortical networks, whose dynamics can in turn be described within the proposed framework. The presented models narrow the gap between well-defined, functional algorithms and their biophysical implementation, improving our understanding of the computational principles the brain might employ. Furthermore, such models are naturally translated to hardware mimicking the vastly parallel neural structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful learning and inference algorithms, which we demonstrate for the physical model system “BrainScaleS-1”.
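
    To make the sampling interpretation above concrete, the following minimal Python sketch (not taken from the paper) uses the common neural-sampling convention: a neuron is considered to be in state z_k = 1 for a refractory window tau after each of its spikes, and time-averaging these binary states yields empirical posterior marginals. All names, parameters and the toy spike trains are illustrative assumptions.

```python
import numpy as np

def spikes_to_states(spike_times, n_neurons, t_max, dt=0.1, tau=10.0):
    """Interpret spikes as samples: neuron k is in state z_k = 1 for a
    refractory window tau after each of its spikes (illustrative convention)."""
    times = np.arange(0.0, t_max, dt)
    states = np.zeros((len(times), n_neurons), dtype=int)
    for k, ts in enumerate(spike_times):          # spike_times: list of arrays
        for t_spike in ts:
            states[(times >= t_spike) & (times < t_spike + tau), k] = 1
    return states

def empirical_marginals(states):
    """Time-averaged activity approximates the sampled marginals p(z_k = 1)."""
    return states.mean(axis=0)

# toy usage: two neurons with a few spikes each
spike_times = [np.array([5.0, 30.0]), np.array([12.0])]
z = spikes_to_states(spike_times, n_neurons=2, t_max=50.0)
print(empirical_marginals(z))
```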

    An energy-based model for neuro-symbolic reasoning on knowledge graphs

    Machine learning on graph-structured data has recently become a major topic in industry and research, finding many exciting applications such as recommender systems and automated theorem proving. We propose an energy-based graph embedding algorithm to characterize industrial automation systems, integrating knowledge from different domains like industrial automation, communications and cybersecurity. By combining knowledge from multiple domains, the learned model is capable of making context-aware predictions regarding novel system events and can be used to evaluate the severity of anomalies that might be indicative of, e.g., cybersecurity breaches. The presented model is mappable to a biologically inspired neural architecture, serving as a first bridge between graph embedding methods and neuromorphic computing, and uncovering a promising edge application for this emerging technology. Comment: Accepted for publication at the 20th IEEE International Conference on Machine Learning and Applications (ICMLA 2021).
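
    As a rough illustration of how an energy-based embedding can score system events, the sketch below assumes a TransE-style energy over subject, relation and object embeddings; the embeddings are randomly initialized stand-ins for a trained model, and all names and indices are placeholders rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 16, 100, 8

# randomly initialized embeddings stand in for a trained model
entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(n_relations, dim))

def energy(subj, rel, obj):
    """TransE-style energy E(s, r, o) = ||e_s + e_r - e_o||_1;
    low energy = plausible event, high energy = potential anomaly."""
    return np.abs(entity_emb[subj] + relation_emb[rel] - entity_emb[obj]).sum()

# score a hypothetical observed event, e.g. (device_42, connects_to, server_7)
print(energy(42, 3, 7))
```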

    SpikE: spike-based embeddings for multi-relational graph data

    Despite the recent success of reconciling spike-based coding with the error backpropagation algorithm, spiking neural networks are still mostly applied to tasks stemming from sensory processing, operating on traditional data modalities like visual or auditory input. A rich data representation that finds wide application in industry and research is the so-called knowledge graph: a graph-based structure where entities are depicted as nodes and relations between them as edges. Complex systems like molecules, social networks and industrial factory systems can be described using the common language of knowledge graphs, allowing the use of graph embedding algorithms to make context-aware predictions in these information-packed environments. We propose a spike-based algorithm where nodes in a graph are represented by single spike times of neuron populations and relations as spike time differences between populations. Learning such spike-based embeddings only requires knowledge about spike times and spike time differences, compatible with recently proposed frameworks for training spiking neural networks. The presented model is easily mapped to current neuromorphic hardware systems and thereby moves inference on knowledge graphs into a domain where these architectures thrive, unlocking a promising industrial application area for this technology. Comment: Accepted for publication at IJCNN 2021.
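
    The core encoding idea lends itself to a compact sketch: entities as vectors of single spike times per population, relations as target spike-time differences, and a plausibility score comparing the two. The initialization and scoring details below are illustrative assumptions rather than the published implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_size, n_entities, n_relations = 20, 50, 5

# each entity is a population of single spike times; each relation is a vector
# of target spike-time differences (illustrative initialization)
spike_times = rng.uniform(0.0, 1.0, size=(n_entities, pop_size))
rel_deltas = rng.uniform(-0.5, 0.5, size=(n_relations, pop_size))

def score(subj, rel, obj):
    """Plausibility from spike-time differences between two populations:
    high score when t_subj - t_obj matches the relation's offset pattern."""
    diff = spike_times[subj] - spike_times[obj] - rel_deltas[rel]
    return -np.abs(diff).mean()

print(score(3, 1, 7))
```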

    Differentiable graph-structured models for inverse design of lattice materials

    Architected materials possessing physico-chemical properties adaptable to disparate environmental conditions embody a disruptive new domain of materials science. Fueled by advances in digital design and fabrication, materials shaped into lattice topologies enable a degree of property customization not afforded to bulk materials. A promising source of inspiration for their design lies in the irregular micro-architectures found in nature. However, the immense design variability unlocked by such irregularity is challenging to probe analytically. Here, we propose a new computational approach using a graph-based representation for regular and irregular lattice materials. Our method uses differentiable message passing algorithms to calculate mechanical properties, thereby allowing automatic differentiation with surrogate derivatives to adjust both the geometric structure and local attributes of individual lattice elements to achieve inversely designed materials with desired properties. We further introduce a graph neural network surrogate model for structural analysis at scale. The methodology is generalizable to any system representable as a heterogeneous graph. Comment: Code: https://gitlab.com/EuropeanSpaceAgency/pylattice2
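
    A toy sketch of the inverse-design loop described above, with a deliberately crude surrogate for the mechanical property (an "effective stiffness" as a sum of thickness-over-length terms) so that the gradient can be written analytically; the actual method uses differentiable message passing with surrogate derivatives, and all quantities below are illustrative.

```python
import numpy as np

# toy lattice as a graph: node positions and edges with per-strut thickness
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (0, 3)]
thickness = np.full(len(edges), 0.5)

def edge_lengths():
    return np.array([np.linalg.norm(nodes[i] - nodes[j]) for i, j in edges])

def stiffness_proxy(t):
    """Crude differentiable surrogate for effective stiffness:
    thicker and shorter struts contribute more (illustrative, not the paper's model)."""
    return np.sum(t / edge_lengths())

# inverse design: gradient descent on strut thicknesses toward a target property
target, lr = 6.0, 0.05
for step in range(200):
    s = stiffness_proxy(thickness)
    grad = 2.0 * (s - target) / edge_lengths()   # d/dt of (s - target)^2
    thickness = np.clip(thickness - lr * grad, 0.05, 2.0)

print(stiffness_proxy(thickness), thickness)
```

    The analytic gradient here is only possible because the surrogate is trivial; for realistic lattice models the same loop would rely on automatic differentiation through the message passing steps.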

    Machine learning on knowledge graphs for context-aware security monitoring

    Machine learning techniques are gaining attention in the context of intrusion detection due to the increasing amounts of data generated by monitoring tools, as well as the sophistication displayed by attackers in hiding their activity. However, existing methods often exhibit important limitations in terms of the quantity and relevance of the generated alerts. Recently, knowledge graphs have found application in the cybersecurity domain, showing the potential to alleviate some of these drawbacks thanks to their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies. We discuss the application of machine learning on knowledge graphs for intrusion detection and experimentally evaluate a link-prediction method for scoring anomalous activity in industrial systems. After initial unsupervised training, the proposed method is shown to produce intuitively well-calibrated and interpretable alerts in a diverse range of scenarios, hinting at the potential benefits of relational machine learning on knowledge graphs for intrusion detection purposes. Comment: Accepted for publication at IEEE-CSR 2021. Data is available at https://github.com/dodo47/cyberM
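
    One simple way to turn link-prediction scores into anomaly alerts, sketched below under the assumption of a TransE-style scoring function, is to rank each observed event against all alternative objects and report the resulting percentile; this is an illustrative scheme with placeholder embeddings, not necessarily the exact calibration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_entities, n_relations = 16, 200, 10
ent = rng.normal(size=(n_entities, dim))   # stand-ins for trained embeddings
rel = rng.normal(size=(n_relations, dim))

def plausibility(s, r, o):
    """TransE-style plausibility (higher = more expected)."""
    return -np.abs(ent[s] + rel[r] - ent[o]).sum()

def anomaly_score(s, r, o):
    """Rank the observed event against all alternative objects; events that
    score worse than most alternatives are flagged (illustrative scheme)."""
    all_scores = -np.abs(ent[s] + rel[r] - ent).sum(axis=1)
    rank = (all_scores > plausibility(s, r, o)).sum()
    return rank / n_entities      # ~0: expected event, ~1: highly anomalous

print(anomaly_score(5, 2, 77))
```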

    Totimorphic structures for space application

    We propose to use a recently introduced Totimorphic metamaterial for constructing morphable space structures. As a first step to investigate the feasibility of this concept, we present a method for morphing such structures autonomously between different shapes using physically plausible actuations, guaranteeing that the material traverses only valid configurations while morphing. With this work, we aim to lay a foundation for exploring a promising and novel class of multi-functional, reconfigurable space structures. Comment: 4 pages, 2 figures, presented at the XXVII Italian Association of Aeronautics and Astronautics (AIDAA) Congress, 4-7 September 2023, Padova, Italy.
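
    A minimal sketch of the "valid configurations only" idea: if shapes are parameterised by actuation angles over struts of fixed length, then interpolating in actuation space keeps every intermediate configuration geometrically valid by construction. The 2D linkage below is a hypothetical stand-in, not the paper's Totimorphic model.

```python
import numpy as np

def config_to_positions(theta, lengths):
    """Hypothetical 2D linkage: joint angles plus fixed strut lengths define
    node positions, so any angle vector is a geometrically valid shape."""
    pos = [np.zeros(2)]
    heading = 0.0
    for ang, length in zip(theta, lengths):
        heading += ang
        pos.append(pos[-1] + length * np.array([np.cos(heading), np.sin(heading)]))
    return np.array(pos)

# morph from shape A to shape B by interpolating actuation angles, so the
# structure only ever passes through valid (length-preserving) configurations
lengths = np.ones(4)
theta_a = np.array([0.0, 0.3, -0.2, 0.5])
theta_b = np.array([0.4, -0.1, 0.6, 0.0])
for s in np.linspace(0.0, 1.0, 5):
    print(config_to_positions((1 - s) * theta_a + s * theta_b, lengths)[-1])
```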

    Learning through structure: towards deep neuromorphic knowledge graph embeddings

    Computing latent representations for graph-structured data is a ubiquitous learning task in many industrial and academic applications, ranging from molecule synthesis to social network analysis and recommender systems. Knowledge graphs are among the most popular and widely used data representations related to the Semantic Web. Next to structuring factual knowledge in a machine-readable format, knowledge graphs serve as the backbone of many artificial intelligence applications and allow the ingestion of context information into various learning algorithms. Graph neural networks attempt to encode graph structures in low-dimensional vector spaces via a message passing heuristic between neighboring nodes. In recent years, a multitude of different graph neural network architectures have demonstrated ground-breaking performance on many learning tasks. In this work, we propose a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures. Based on the insight that randomly initialized and untrained (i.e., frozen) graph neural networks are able to preserve local graph structures, we compose a frozen neural network with shallow knowledge graph embedding models. We experimentally show that, already on conventional computing hardware, this leads to a significant speedup and memory reduction while maintaining a competitive performance level. Moreover, we extend the frozen architecture to spiking neural networks, introducing a novel, event-based and highly sparse knowledge graph embedding algorithm that is suitable for implementation in neuromorphic hardware. Comment: Accepted for publication at the International Conference on Neuromorphic Computing (ICNC 2021).
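
    The frozen-GNN idea can be sketched in a few lines: node features are propagated with fixed, randomly initialized weights that are never trained, and only a shallow relational scoring model operates on the resulting features. The toy graph, dimensions and TransE-style scorer below are illustrative choices, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, dim = 30, 16
adj = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)   # toy graph
np.fill_diagonal(adj, 1.0)                                    # self-loops

x = rng.normal(size=(n_nodes, dim))               # initial node features
W = rng.normal(size=(dim, dim)) / np.sqrt(dim)    # frozen, never trained

def frozen_message_passing(x, rounds=2):
    """Untrained GNN layer: average neighbour features, apply a fixed random
    projection and a nonlinearity. Weights stay frozen throughout."""
    deg = adj.sum(axis=1, keepdims=True)
    for _ in range(rounds):
        x = np.tanh(((adj @ x) / deg) @ W)
    return x

h = frozen_message_passing(x)

# shallow, trainable part: relation embeddings scoring pairs of frozen features
rel = rng.normal(size=(5, dim))
def score(s, r, o):
    return -np.abs(h[s] + rel[r] - h[o]).sum()   # TransE-style on GNN features

print(score(1, 0, 4))
```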

    Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods

    Machine learning (ML) on graph-structured data has recently received increased interest in the context of intrusion detection in the cybersecurity domain. Due to the increasing amounts of data generated by monitoring tools, as well as more and more sophisticated attacks, these ML methods are gaining traction. Knowledge graphs and their corresponding learning techniques, such as Graph Neural Networks (GNNs), with their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies, are finding application in the cybersecurity domain. However, similar to other connectionist models, GNNs lack transparency in their decision making. This is especially important because the typically high number of false positive alerts in the cybersecurity domain must be triaged by domain experts, requiring considerable manual effort. We therefore address Explainable AI (XAI) for GNNs to enhance trust management by exploring the combination of symbolic and sub-symbolic methods that incorporate domain knowledge in the area of cybersecurity. We experimented with this approach by generating explanations in an industrial demonstrator system. The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios. Not only do the explanations provide deeper insights into the alerts, but they also lead to a reduction of false positive alerts by 66% and by 93% when including the fidelity metric. Comment: arXiv admin note: text overlap with arXiv:2105.0874
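
    Purely as an illustration of the filtering step, the sketch below keeps an alert only if the explanation's fidelity (how well the highlighted evidence reproduces the GNN's decision) exceeds a threshold; the thresholds, data structures and example alerts are hypothetical and do not reproduce the paper's pipeline.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    triple: tuple          # (subject, relation, object) that was flagged
    anomaly_score: float   # GNN-based score that raised the alert
    fidelity: float        # how well the explanation reproduces that score

def filter_alerts(alerts, min_anomaly=0.8, min_fidelity=0.7):
    """Keep an alert only if it is both strongly anomalous and well explained."""
    return [a for a in alerts
            if a.anomaly_score >= min_anomaly and a.fidelity >= min_fidelity]

alerts = [Alert(("host1", "connects_to", "plc3"), 0.95, 0.9),
          Alert(("host2", "reads", "sensor7"), 0.85, 0.2)]
print(filter_alerts(alerts))   # second alert is dropped as a likely false positive
```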

    Stochasticity from function -- why the Bayesian brain may need no noise

    An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad hoc source of well-behaved, explicit noise, either on the input or on the output side of single-neuron dynamics, most often assuming an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between the abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
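
    To illustrate the quantities the analysis revolves around, the toy sketch below generates correlated binary states (standing in for the sampled states of neurons receiving shared input; the random numbers here are purely for data generation) and computes their cross-correlation matrix, the object the abstract relates to effective synaptic interaction strengths.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 5000, 4
drive = rng.normal(size=(T, 1))                              # shared input
z = (rng.normal(size=(T, n)) + drive > 0.5).astype(float)    # correlated states

corr = np.corrcoef(z.T)     # auto-/cross-correlations of the sampled states
print(np.round(corr, 2))
# the abstract's point: such correlations can be absorbed into effective
# synaptic weights, so plasticity can compensate for them without explicit noise
```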

    Neuro-symbolic computing with spiking neural networks

    Knowledge graphs are an expressive and widely used data structure due to their ability to integrate data from different domains in a sensible and machine-readable way. Thus, they can be used to model a variety of systems such as molecules and social networks. However, it remains an open question how symbolic reasoning could be realized in spiking systems and, therefore, how spiking neural networks could be applied to such graph data. Here, we extend previous work on spike-based graph algorithms by demonstrating how symbolic and multi-relational information can be encoded using spiking neurons, allowing reasoning over symbolic structures like knowledge graphs with spiking neural networks. The introduced framework is enabled by combining the graph embedding paradigm and the recent progress in training spiking neural networks using error backpropagation. The presented methods are applicable to a variety of spiking neuron models and can be trained end-to-end in combination with other differentiable network architectures, which we demonstrate by implementing a spiking relational graph neural network. Comment: Accepted for publication at the International Conference on Neuromorphic Systems (ICONS) 2022.
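
    The "training spiking neural networks with error backpropagation" ingredient typically relies on a surrogate gradient for the non-differentiable spike nonlinearity; the minimal PyTorch sketch below shows that general mechanism only and is not code from the paper. The threshold behaviour and the sigmoid-derivative surrogate with slope 5 are illustrative choices.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient (scaled sigmoid derivative),
    the standard trick that makes spiking networks trainable end to end."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()          # binary spike in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        sg = torch.sigmoid(5.0 * v)     # smooth stand-in for the step function
        return grad_out * sg * (1 - sg) * 5.0

v = torch.randn(4, requires_grad=True)   # toy membrane potentials
s = SpikeFn.apply(v)
s.sum().backward()
print(v.grad)                            # gradients flow despite the hard threshold
```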